Dockerized High Availability Configuration
Introduction
Actions Pro images are available for customers in AWS ECR. To configure the necessary components and services, a Docker Compose file is used. This document provides step-by-step instructions for installing and running Actions Pro in a 3-node cluster.
Node Specifications
The setup requires three nodes, all of which must meet the specifications detailed in the installation instructions.
Each node must be able to communicate with the others using their domain names or IP addresses.
System Configuration
To allow the Elasticsearch container to allocate sufficient virtual memory areas, increase the vm.max_map_count kernel parameter on all nodes:
- Edit the /etc/sysctl.conf file and add the following line:
  vm.max_map_count=262144
- Save the file and apply the change with:
  sysctl -p
Required Open Ports
Primary Node:
- 3306 – MariaDB
- 4004, 15672 – RabbitMQ
- 5601 – Kibana
- 8443, 8080, 8005 – Actions Pro Tomcat
- 9200, 9300 – Elasticsearch

Secondary Nodes:
- 4004, 15672 – RabbitMQ (required only on node2, which serves as the backup host)
- 8443, 8080, 8005 – Actions Pro Tomcat
- 9200, 9300 – Elasticsearch
 
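If your hosts use firewalld, the commands for opening the primary-node ports can be generated as in this sketch (firewalld and this workflow are assumptions about your environment; adapt to your distribution's firewall tooling, and review the generated commands before running them):

```shell
# Generate firewalld commands for the primary-node ports listed above
# (on secondary nodes, drop 3306 and 5601, and open 4004/15672 only on node2).
for port in 3306 4004 15672 5601 8443 8080 8005 9200 9300; do
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```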
Prerequisites
Ensure that Docker and Docker Compose are installed on all three nodes.
Docker Compose Setup
Node Assignment
- Download and extract docker-compose.zip on all three nodes.
- A naming convention for the nodes is assumed. For example:
  - Primary Node: node1
  - First Secondary Node: node2
  - Second Secondary Node: node3
- The extracted folder contains the following Docker Compose files:
  - docker-compose-ha-pr-node1.yml (primary node)
  - docker-compose-ha-se-node2.yml (first secondary node)
  - docker-compose-ha-se-node3.yml (second secondary node)

The extracted folder also contains additional supporting folders.
Service Deployment
- The secondary nodes do not deploy the MariaDB, Kibana, or rsLog components; these services are removed from the Docker Compose files of the secondary nodes.
- RabbitMQ is configured in primary/backup mode and runs only on node1 (primary) and node2 (first secondary).
Configuration
Environment Variables
Actions Pro is configured using properties in the blueprint.properties file. These can be assigned to services in the Docker Compose file under the environment attribute.
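For instance, a blueprint.properties value can be attached to a service under the environment attribute as in this sketch (the service name and image placeholder are illustrative, not taken from the shipped Compose files):

```yaml
services:
  rscontrol:
    image: <actions-pro-rscontrol-image>
    environment:
      # blueprint.properties keys, with dots replaced by underscores
      - rscontrol_log4j_Loggers_Root_level=DEBUG
      - LOCALHOST=primary-host.domain.com
```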
Handling Dots in Environment Variables
In a bash shell, . is not a valid character in environment variable names. Replace dots with underscores. Example:
- Original: rscontrol.log4j.Loggers.Root.level=DEBUG
- Updated: rscontrol_log4j_Loggers_Root_level=DEBUG
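When porting many properties, the dot-to-underscore conversion can be scripted; a minimal POSIX-shell sketch using the example above:

```shell
# Convert a blueprint.properties key to a valid environment-variable name
# by replacing dots with underscores; the value after "=" is left untouched.
prop='rscontrol.log4j.Loggers.Root.level=DEBUG'
name=$(printf '%s' "${prop%%=*}" | tr '.' '_')
printf '%s=%s\n' "$name" "${prop#*=}"
# prints rscontrol_log4j_Loggers_Root_level=DEBUG
```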
Node-Specific Configuration
- The LOCALHOST environment variable for rsview and rscontrol must match the corresponding domain name or IP address in RSVIEW_NODES and RSCONTROL_NODES in the actionspro environment file.
- On node3, set SERVER_ID=3 in the actionspro environment file. Increment this value for each additional node.
The primary node hosts all the services defined in its Docker Compose file.
The second node, node2, does NOT host the mariadb, kibana, or rslog services, as these run only on the primary node.
The third node, node3, does NOT host the mariadb, kibana, rabbitmq, or rslog services.
Updating Domain Names or IP Addresses
To simplify setup, the Docker Compose file, actionspro environment file, and kibana.yml file (located in the config folder on the primary node) are preconfigured with placeholder domain names:
- Primary Node: primary-host.domain.com
- First Secondary Node: secondary1-host.domain.com
- Second Secondary Node: secondary2-host.domain.com
Before deployment, replace these placeholders with the actual domain names or IP addresses of your respective host nodes.
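One way to do the replacement is a single sed pass over the affected files; the snippet below shows the substitution on a sample line, with the in-place invocation in a comment (file names and target hosts are illustrative):

```shell
# In practice, run the substitutions in place over the affected files, e.g.:
#   sed -i -e 's/primary-host\.domain\.com/node1.example.com/g' \
#          -e 's/secondary1-host\.domain\.com/node2.example.com/g' \
#          -e 's/secondary2-host\.domain\.com/node3.example.com/g' \
#          docker-compose-ha-pr-node1.yml config/kibana.yml
# Quick check of the substitution pattern on a sample line:
echo 'host: primary-host.domain.com' | sed 's/primary-host\.domain\.com/node1.example.com/'
# prints host: node1.example.com
```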
Elasticsearch Configuration
- The discovery.seed_hosts property should contain a comma-separated list of all node domain names or IPs, excluding the current node.
- Each additional Elasticsearch node must have a unique service name, reflected in node.name and container_name.
- The cluster.initial_master_nodes property must list the unique Elasticsearch service names in the cluster.
- In the health-check configuration, the curl command should reference the current node's domain name or IP.
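Put together, an Elasticsearch service entry following these rules might look like this sketch for the primary node (the service name, image, and exact health-check command are illustrative, not copied from the shipped Compose files):

```yaml
services:
  elasticsearch1:
    container_name: elasticsearch1
    environment:
      - node.name=elasticsearch1
      - cluster.initial_master_nodes=elasticsearch1,elasticsearch2,elasticsearch3
      # all other nodes, excluding the current one
      - discovery.seed_hosts=secondary1-host.domain.com,secondary2-host.domain.com
    healthcheck:
      # reference the current node's own domain name or IP
      test: ["CMD-SHELL", "curl -sk https://primary-host.domain.com:9200 || exit 1"]
```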
On all nodes, the properties kibana_yml_server_host and kibana_yml_server_publicBaseUrl in the actionspro environment file must use the domain name of the primary host. The public base URL (or value of the property kibana_yml_server_publicBaseUrl) should be accessible via a web browser.
Starting the Cluster
Preparation
Ensure that all nodes contain:
- The certs folder with keystore.jks and keystore.PKCS12 for Tomcat and Kibana (on node1 only)
- A valid license file in the license folder
Starting Services
- On the primary node, start all services:
  docker compose -f <docker-compose-file.yml> up -d
- Simultaneously, on a secondary node, start the Elasticsearch component:
  docker compose -f <docker-compose-file.yml> up elasticsearch2 -d
- On the primary node, wait for rsview to show a healthy status, then start all components on the secondary nodes:
  docker compose -f <docker-compose-file.yml> up -d
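The wait for a healthy rsview can be scripted rather than checked by hand; a sketch using Docker's health status (the container name rsview is an assumption, so confirm the actual name with docker ps):

```shell
# Helper: block until the named container's health status is "healthy".
wait_healthy() {
  until [ "$(docker inspect -f '{{.State.Health.Status}}' "$1" 2>/dev/null)" = "healthy" ]; do
    sleep 5
  done
}
# Usage on the primary node before starting the secondaries:
#   wait_healthy rsview
```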
Viewing Logs
To monitor logs for a specific service, use:
   docker logs <service-name> -f
Accessing Actions Pro
Once all services are running, Actions Pro can be accessed from any node using:
https://<primary-node-domain>:8443